123 research outputs found

    How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray CT

    We introduce phase-diagram analysis, a standard tool in compressed sensing, to the X-ray CT community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In compressed sensing, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT, for which the same theoretical results do not hold. In three case studies we demonstrate the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably to an optimal compressed sensing strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared to standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. Comment: 24 pages, 13 figures.
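
    To make the notion of an empirical phase diagram concrete, the sketch below (not the authors' code) evaluates a single cell of such a diagram for the Gaussian-sensing case: it draws a Gaussian sensing matrix at a given undersampling ratio delta and sparsity fraction rho, solves the basis-pursuit problem with CVXPY, and records whether the sparse signal is recovered. Averaging over repetitions on a grid of (delta, rho) values gives the empirical success probabilities whose transition boundary the phase diagram displays; all names and parameter values are illustrative assumptions.

    # Minimal sketch of one cell of an empirical compressed-sensing phase diagram.
    # Assumes NumPy and CVXPY; parameter names are illustrative, not from the paper.
    import numpy as np
    import cvxpy as cp

    def recovery_success(n=200, delta=0.5, rho=0.2, tol=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        m = int(delta * n)                 # number of measurements
        k = max(1, int(rho * m))           # sparsity of the ground-truth signal
        x_true = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x_true[support] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
        b = A @ x_true
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
        return np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true) < tol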

    Model-based control algorithms for the quadruple tank system: An experimental comparison

    We compare the performance of proportional-integral-derivative (PID) control, linear model predictive control (LMPC), and nonlinear model predictive control (NMPC) for a physical setup of the quadruple tank system (QTS). We estimate the parameters in a continuous-discrete time stochastic nonlinear model for the QTS using a prediction-error method based on the measured process data and a maximum likelihood (ML) criterion. The NMPC algorithm uses this identified continuous-discrete time stochastic nonlinear model, and the LMPC algorithm is based on a linearization of it. We tune the PID controller with Skogestad's IMC tuning rules applied to a transfer function representation of the linearized model (a generic form of these rules is sketched below). Norms of the observed tracking errors and of the rate of change of the manipulated variables are used to compare the performance of the control algorithms. The LMPC and NMPC perform better than the PID controller for a predefined time-varying setpoint trajectory, and the two predictive controllers have similar performance. Comment: 6 pages, 5 figures, 3 tables, to be published in Foundations of Computer Aided Process Operations / Chemical Process Control (FOCAPO/CPC 2023), Hilton San Antonio Hill Country, San Antonio, Texas.
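
    For reference, a common form of Skogestad's (S)IMC PI tuning rules for a first-order-plus-time-delay model g(s) = K exp(-theta*s) / (tau1*s + 1) is sketched below; this is a generic textbook formulation with assumed model parameters, not the exact procedure or numbers used in the paper.

    # Minimal sketch of Skogestad's SIMC PI tuning for a first-order-plus-time-delay model.
    def simc_pi(K, tau1, theta, tau_c=None):
        """Return (Kc, tau_I) for a PI controller.

        K     : process gain
        tau1  : dominant time constant
        theta : effective time delay
        tau_c : desired closed-loop time constant (Skogestad's default is tau_c = theta)
        """
        if tau_c is None:
            tau_c = theta
        Kc = tau1 / (K * (tau_c + theta))         # proportional gain
        tau_I = min(tau1, 4.0 * (tau_c + theta))  # integral time
        return Kc, tau_I

    # Example with hypothetical tank-level dynamics (not identified from the real QTS):
    # Kc, tau_I = simc_pi(K=0.12, tau1=90.0, theta=5.0)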

    Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP, 2011) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. Comment: Resubmitted to Physics in Medicine and Biology. Text has been modified according to referee comments, and typos in the equations have been corrected.
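
    For orientation, the generic CP iteration for a problem of the form min_x F(Kx) + G(x) can be sketched as follows. The operator K (e.g. a CT system matrix, possibly stacked with a discrete gradient) and the proximal mappings of F* and G are left abstract, and all names are illustrative; this is a bare-bones sketch, not the paper's derivations for specific CT problems.

    # Minimal sketch of the generic Chambolle-Pock primal-dual iteration for
    # min_x F(K x) + G(x); prox_F_conj and prox_G are problem-specific callables.
    def chambolle_pock(K, Kt, prox_F_conj, prox_G, x0, y0, L, n_iter=100, theta=1.0):
        """K, Kt: forward operator and its adjoint (callables); L >= ||K||."""
        sigma = tau = 1.0 / L                  # step sizes satisfying sigma*tau*L**2 <= 1
        x, y = x0.copy(), y0.copy()
        x_bar = x.copy()
        for _ in range(n_iter):
            y = prox_F_conj(y + sigma * K(x_bar), sigma)   # dual ascent step
            x_new = prox_G(x - tau * Kt(y), tau)           # primal descent step
            x_bar = x_new + theta * (x_new - x)            # over-relaxation
            x = x_new
        return x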

    Enhanced hyperspectral tomography for bioimaging by spatiospectral reconstruction.

    Here we apply hyperspectral bright-field imaging to collect computed tomographic images with excellent energy resolution (~1 keV), applying it for the first time to map the distribution of stain in a fixed biological sample through its characteristic K-edge. Conventionally, because the photons detected at each pixel are distributed across as many as 200 energy channels, energy-selective images are characterised by low count rates and poor signal-to-noise ratio. This means high X-ray exposures, long scan times and high doses are required to image unique spectral markers. Here, we achieve high-quality energy-dispersive tomograms from low-dose, noisy datasets using a dedicated iterative reconstruction algorithm. This exploits spatial smoothness and inter-channel structural correlation in the spectral domain using two carefully chosen regularisation terms. For a multi-phase phantom, a 36-fold reduction in scan time is demonstrated. Spectral analysis methods including K-edge subtraction and absorption step-size fitting are evaluated for an ex vivo, single (iodine)-stained biological sample, where low chemical concentration and inhomogeneous distribution can affect soft-tissue segmentation and visualisation. The reconstruction algorithms are available through the open-source Core Imaging Library. Taken together, these tools offer new capabilities for visualisation and elemental mapping, with promising applications for multiply-stained biological specimens.
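
    As a rough illustration of the K-edge subtraction step mentioned above (not the paper's implementation), one can difference reconstructed attenuation images averaged over energy channels just below and just above the stain's K-edge; the iodine K-edge energy of 33.17 keV is standard, while the array layout, window width and function name are illustrative assumptions.

    # Minimal sketch of K-edge subtraction on a reconstructed hyperspectral volume.
    # recon has shape (n_channels, ny, nx); energies holds the channel energies in keV.
    import numpy as np

    def k_edge_subtraction(recon, energies, edge_kev=33.17, window_kev=2.0):
        below = (energies >= edge_kev - window_kev) & (energies < edge_kev)
        above = (energies > edge_kev) & (energies <= edge_kev + window_kev)
        mu_below = recon[below].mean(axis=0)   # mean attenuation just below the edge
        mu_above = recon[above].mean(axis=0)   # mean attenuation just above the edge
        return mu_above - mu_below             # large values highlight the stained regions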

    Are local wind power resources well estimated?

    Planning and financing of wind power installations depend critically on accurate resource estimation, in addition to a number of other considerations relating to environment and economy. Furthermore, individual wind energy installations cannot in general be seen in isolation. It is well known that the spacing of turbines in wind farms is critical for maximum power production, and it is well established that the collective effect of wind turbines in large wind farms, or of several wind farms, can limit the wind power extraction downwind. This has been documented by many years of production statistics. For very large, regional-scale wind farms, a number of numerical studies have pointed to additional adverse changes to the regional wind climate, most recently the detailed studies of Adams and Keith [1]. They show that the geophysical limit to wind power production is likely to be lower than previously estimated. Although this problem concerns the far future, it has to be considered seriously. In their paper they estimate that a wind farm larger than 100 km² is limited to about 1 W m⁻². However, a 20 km² offshore farm, Horns Rev 1, has over the last five years produced 3.98 W m⁻² [5] (see the short calculation after the reference list below). In that light it is highly unlikely that the effects pointed out by [1] will pose any immediate threat to wind energy in the coming decades.
    Today a number of well-established mesoscale and microscale models exist for estimating wind resources and design parameters, and in many cases they work well. This is especially true if good local data are available for calibrating the models or for their validation. The wind energy industry is nevertheless still troubled by many projects showing considerable negative discrepancies between calculated and actually experienced production numbers and operating conditions. Therefore it has been decided at the European Union level to launch a project, ‘The New European Wind Atlas’, aiming at reducing the overall uncertainties in determining wind conditions. The project is structured around three areas of work, to be implemented in parallel. One of the great challenges to the project is the application of mesoscale models for wind resource calculation, which is by no means a simple matter [3]. The project will use global reanalysis data as boundary conditions. These datasets, which are time series of the large-scale meteorological situation covering decades, have been created by assimilating measurement data from around the globe in a dynamically consistent fashion using large-scale numerical models. For wind energy, the reanalysis datasets serve as a long record of the large-scale wind conditions. The large-scale reanalyses are performed in only a few global weather prediction centres, using models that have been developed over many years, are still being developed and validated, and are used in operational services. Mesoscale models are more diverse, but nowadays quite a number have a proven track record in applications such as regional weather prediction and also wind resource assessment. There are still some issues, and use of model results without proper validation may lead to gross errors. For resource assessment it is necessary to include direct validation with in situ observed wind data over sufficiently long periods. In doing so, however, the mesoscale model output must be downscaled using some microscale physical or empirical/statistical model.
    That downscaling process is not straightforward, and the microscale models themselves tend to disagree in some terrain types, as shown by recent blind tests [4]. All these ‘technical’ details and choices, not to mention the model formulation itself, the numerical schemes used, and the effective spatial and temporal resolution, can have a significant impact on the results. These problems, as well as the question of how uncertainties propagate through the model chain to the calculated wind resources, are central to the work on the New European Wind Atlas. The work of [1] shows that when wind energy has been implemented on a very massive scale, it will affect the power production from entire regions, and that has to be taken into account.
    References
    [1] Adams A S and Keith D W 2013 Are global wind power resource estimates overstated? Environ. Res. Lett. 8 015021
    [2] 2011 A New EU Wind Energy Atlas: Proposal for an ERANET+ Project (produced by the TPWind Secretariat) Nov.
    [3] Petersen E L and Troen I 2012 Wind conditions and resource assessment WIREs Energy Environ. 1 206–17
    [4] Bechmann A, Sørensen N N, Berg J, Mann J and Rethore P-E 2011 The Bolund experiment, part II: blind comparison of microscale flow models Boundary-Layer Meteorol. 141 245–71
    [5] http://www.lorc.dk/offshore-wind-farms-map/horns-rev-1 http://www.ens.d
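
    For scale (a back-of-the-envelope check using only the numbers quoted above, not an additional result): the observed Horns Rev 1 power density corresponds to an average output of roughly

        P_avg ≈ 3.98 W m⁻² × 20 × 10⁶ m² ≈ 8.0 × 10⁷ W ≈ 80 MW,

    i.e. about four times the ≈20 MW that the ~1 W m⁻² limit of [1] would allow over the same area, which is the point of the comparison in the text.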

    Core Imaging Library - Part II: multichannel reconstruction for dynamic and spectral tomography

    The newly developed Core Imaging Library (CIL) is a flexible plug-and-play library for tomographic imaging with a specific focus on iterative reconstruction. CIL provides building blocks for tailored regularized reconstruction algorithms and explicitly supports multichannel tomographic data. In the first part of this two-part publication, we introduced the fundamentals of CIL. This paper focuses on applications of CIL to multichannel data, e.g. dynamic and spectral data. We formalize different optimization problems for colour processing, dynamic and hyperspectral tomography, and demonstrate CIL’s capabilities for designing state-of-the-art reconstruction methods through case studies and code snapshots.
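
    As a representative example of the kind of multichannel optimization problem such a library lets one formalize (a generic form chosen here for illustration, not necessarily the exact functionals used in the paper), a spatiospectrally regularized reconstruction of channels u = (u_1, ..., u_C) from data b_c with forward operators A_c can be written as

    \[
      \min_{u} \; \sum_{c=1}^{C} \tfrac{1}{2} \| A_c u_c - b_c \|_2^2
      \;+\; \alpha \sum_{c=1}^{C} \mathrm{TV}(u_c)
      \;+\; \beta \, \| D_{\mathrm{ch}} u \|_{2,1},
    \]

    where TV is a spatial total-variation term, D_ch is a finite-difference operator along the channel (time or energy) dimension, and alpha and beta balance spatial smoothness against inter-channel correlation.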

    The cooling of atomic and molecular gas in DR21

    We present an overview of a high-mass star formation region through the major (sub-)mm and far-infrared cooling lines to gain insight into the physical conditions and the energy budget of the molecular cloud. We used the KOSMA 3 m telescope to map the core (10' × 14') of the Galactic star forming region DR 21/DR 21 (OH) in the Cygnus X region in the two fine-structure lines of atomic carbon [C I], four mid-J transitions of CO and ¹³CO, and CS J=7→6. These observations have been combined with FCRAO J=1→0 observations of ¹³CO and C¹⁸O. Five positions, including DR21, DR21 (OH), and DR21 FIR1, were observed with the ISO/LWS grating spectrometer in the [O I] 63 and 145 μm lines, the [C II] 158 μm line, and four high-J CO lines. We discuss the intensities and line ratios at these positions and apply local thermal equilibrium (LTE) and non-LTE analysis methods in order to derive physical parameters such as masses, densities and temperatures. The CO line emission has been modeled up to J=20. From non-LTE modeling of the low- to high-J CO lines we identify two gas components: a cold one at kinetic temperatures of T_kin ~ 30-40 K, and one with T_kin ~ 80-150 K at local clump densities of about n(H₂) ~ 10⁴-10⁶ cm⁻³. While the cold quiescent component is massive, typically containing more than 94% of the mass, the warm, dense, and turbulent gas dominates the mid- and high-J CO line emission and its large line widths. The medium must be clumpy, with a volume filling factor of a few percent. The CO lines are found to be important for the cooling of the cold molecular gas, e.g. at DR21 (OH). Near the outflow of the UV-heated source DR21, the gas cooling is dominated by line emission of atomic oxygen and of CO.
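
    For context, the optically thin LTE relations that underlie such column-density and mass estimates can be written in a generic textbook form (shown here for illustration; the paper's detailed analysis, including non-LTE modeling, goes beyond this):

    \[
      N_u = \frac{8 \pi k \nu^2}{h c^3 A_{ul}} \int T_\mathrm{mb}\, dv ,
      \qquad
      N_\mathrm{tot} = N_u \, \frac{Q(T_\mathrm{ex})}{g_u} \, e^{E_u / k T_\mathrm{ex}} ,
    \]

    where N_u is the upper-level column density obtained from the integrated main-beam brightness temperature, A_ul and g_u are the Einstein coefficient and statistical weight of the transition, Q is the partition function, and T_ex is the excitation temperature (equal to the kinetic temperature under LTE).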